Add gemini-2.0-flash model to llama-stack config (#28)
omertuc merged 1 commit into rh-ecosystem-edge:main
Conversation
Signed-off-by: Eran Cohen <eranco@redhat.com>
Walkthrough
A new model configuration for gemini/gemini-2.0-flash was added to the llama-stack config.
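The change presumably amounts to one new entry in the llama-stack run configuration. A sketch of what that entry might look like (the field names follow the common llama-stack `run.yaml` layout, but they are assumptions here, not taken from this PR's diff):

```yaml
models:
  - model_id: gemini/gemini-2.0-flash
    provider_id: gemini
    model_type: llm
```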
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant InferenceStore
    participant GeminiProvider
    Client->>InferenceStore: Request inference using model_id: gemini/gemini-2.0-flash
    InferenceStore->>GeminiProvider: Forward request to gemini/gemini-2.0-flash
    GeminiProvider-->>InferenceStore: Return inference result
    InferenceStore-->>Client: Return result
```
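The routing shown in the diagram can be sketched as a minimal prefix-based dispatch. All class and method names below are hypothetical illustrations of the idea, not the actual llama-stack implementation:

```python
# Minimal sketch of the routing in the sequence diagram: the store maps
# the provider prefix of a model id (the part before "/") to a provider
# and forwards the request. Names here are hypothetical, not llama-stack's.

class GeminiProvider:
    def infer(self, model_id: str, prompt: str) -> str:
        # A real provider would call the Gemini API here.
        return f"[{model_id}] response to: {prompt}"

class InferenceStore:
    def __init__(self) -> None:
        self.providers: dict[str, GeminiProvider] = {}

    def register(self, prefix: str, provider: GeminiProvider) -> None:
        # Map a provider id prefix (e.g. "gemini") to a provider instance.
        self.providers[prefix] = provider

    def infer(self, model_id: str, prompt: str) -> str:
        # Route by the prefix before "/" in the model id,
        # e.g. "gemini/gemini-2.0-flash" -> "gemini".
        prefix = model_id.split("/", 1)[0]
        return self.providers[prefix].infer(model_id, prompt)

store = InferenceStore()
store.register("gemini", GeminiProvider())
print(store.infer("gemini/gemini-2.0-flash", "hello"))
```

The prefix convention is why the model is registered as `gemini/gemini-2.0-flash` rather than the bare model name: the part before the slash selects the provider.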